
    Tensor Factorization with Label Information for Fake News Detection

    The buzz over so-called "fake news" has created concerns about a degraded media environment and led to the need for technological solutions. As the detection of fake news is increasingly treated as a technological problem, it has attracted considerable research. Most of these studies focus primarily on information extracted from textual news content. In contrast, we detect fake news solely from the structural information of social networks. We suggest that the underlying network connections of users who share fake news are discriminative enough to support the detection of fake news. Accordingly, we model each post as a network of friendship interactions and represent a collection of posts as a multidimensional tensor. Taking the available labeled data into account, we propose a tensor factorization method that associates the class labels of data samples with their latent representations. Specifically, we combine a classification error term with the standard factorization in a unified optimization process. Results on real-world datasets demonstrate that our proposed method is competitive against state-of-the-art methods while implementing an arguably simpler approach.
    Comment: Presented at the Workshop on Reducing Online Misinformation Exposure (ROME 2019).
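
    The coupled objective described above can be illustrated with a short sketch: a CP (CANDECOMP/PARAFAC) factorization of a posts x users x users interaction tensor, augmented with a least-squares classification term that ties the post-mode factor matrix to the class labels. This is a minimal illustration under assumptions, not the authors' exact formulation; the rank, step size, squared classification loss, and the simplification that all posts are labeled are choices made here for brevity.

```python
# Sketch: CP factorization with a label-coupled post-mode factor (illustrative only).
# Objective: ||X - [[A, B, C]]||^2 + lam * ||A w - y||^2, minimized by gradient descent.
import numpy as np

rng = np.random.default_rng(0)

def khatri_rao(U, V):
    # Column-wise Kronecker product: result[(u*V.shape[0] + v), r] = U[u, r] * V[v, r]
    return np.einsum('ur,vr->uvr', U, V).reshape(-1, U.shape[1])

def fit(X, y, rank=4, lam=1.0, lr=1e-3, iters=500):
    I, J, K = X.shape
    A = rng.standard_normal((I, rank)) * 0.1   # post-mode factors (latent features)
    B = rng.standard_normal((J, rank)) * 0.1   # user-mode factors
    C = rng.standard_normal((K, rank)) * 0.1   # user-mode factors
    w = np.zeros(rank)                          # linear classifier on post factors
    X1 = X.reshape(I, -1)                       # mode-1 unfolding, columns j*K + k
    X2 = np.moveaxis(X, 1, 0).reshape(J, -1)    # mode-2 unfolding, columns i*K + k
    X3 = np.moveaxis(X, 2, 0).reshape(K, -1)    # mode-3 unfolding, columns i*J + j
    for _ in range(iters):
        KR1 = khatri_rao(B, C)
        gA = 2 * (A @ KR1.T - X1) @ KR1 + 2 * lam * np.outer(A @ w - y, w)
        KR2 = khatri_rao(A, C)
        gB = 2 * (B @ KR2.T - X2) @ KR2
        KR3 = khatri_rao(A, B)
        gC = 2 * (C @ KR3.T - X3) @ KR3
        gw = 2 * lam * A.T @ (A @ w - y)
        A -= lr * gA; B -= lr * gB; C -= lr * gC; w -= lr * gw
    return A, B, C, w

# Toy example: 30 posts over 10 users, binary friendship-interaction tensor.
X = (rng.random((30, 10, 10)) < 0.1).astype(float)
y = rng.integers(0, 2, 30).astype(float)
A, B, C, w = fit(X, y)
scores = A @ w   # classification scores derived from the label-informed latent features
```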

    Quantification of cardiac magnetic resonance imaging perfusion in the clinical setting at 3T

    Dynamic contrast-enhanced (DCE) cardiac magnetic resonance imaging (MRI) is well established as a non-invasive method for qualitatively detecting obstructive coronary artery disease (CAD), which can impair myocardial blood flow and may result in myocardial infarction. Mathematical modelling of cardiac DCE-MRI data can provide quantitative assessment of myocardial blood flow, which may have merit in further stratifying patients with obstructive CAD and in improving the diagnosis and prognostication of the disease in the clinical setting. This thesis investigates the development of a quantitative analysis protocol for cardiac DCE-MRI data.
    In the first study, Fermi and distributed parameter (DP) modelling are compared in single bolus versus dual bolus analysis. For model-based myocardial blood flow quantification, the convolution of a model with the arterial input function (i.e. the contrast agent concentration-time curve extracted from the left ventricular cavity) is fitted to the tissue contrast agent concentration-time curve. In contrast to dual bolus DCE-MRI protocols, single bolus protocols reduce patient discomfort and acquisition protocol duration and complexity, but are prone to arterial input function saturation caused in the left ventricular cavity by the high concentration of contrast agent during bolus passage. Saturation effects can degrade the accuracy of quantification using Fermi modelling. The analysis in this study showed that DP modelling is less dependent on arterial input function saturation than Fermi modelling in eight healthy volunteers. In a pilot cohort of five patients, DP modelling detected, for the first time, reduced myocardial blood flow in all stenotic vessels versus standard clinical assessments.
    In the second study, it was investigated whether first-pass DP modelling can give accurate myocardial blood flow against ideal values generated by numerical simulations. Unlike Fermi modelling, which is convolved with only the first-pass range of the arterial input function, DP modelling is convolved with the entire contrast agent concentration-time course. In noisy and/or dual bolus data, it can be particularly challenging to identify the end point of the first pass in the arterial input function. This study demonstrated that, contrary to Fermi modelling, myocardial blood flow analysis using DP modelling does not depend on the number of time points used for fitting. Furthermore, these data suggest that DP modelling can reduce the quantitative variability caused by subjectivity in the selection of the first-pass range in cardiac MR data, which may in turn facilitate the development of more automated software algorithms for myocardial blood flow quantification.
    In the third study, Fermi and DP modelling were compared against invasive clinical assessments and visual MR estimates to assess their diagnostic ability in detecting obstructive CAD. A single bolus DCE-MRI protocol was implemented in twenty-four patients. In per-vessel analysis, DP modelling reached superior sensitivity and negative predictive value in detecting obstructive CAD compared to Fermi modelling and visual estimates. In per-patient analysis, DP modelling reached the highest sensitivity and negative predictive value in detecting obstructive CAD.
    These studies show that DP modelling analysis of cardiac single bolus DCE-MRI data can provide important functional information and can establish haemodynamic biomarkers to non-invasively improve the diagnosis and prognostication of obstructive CAD.
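
    As a rough illustration of the model-based quantification described above, the sketch below fits a simplified Fermi impulse-response model to a tissue curve by convolving it with the arterial input function. The parametrization, sampling interval, and synthetic gamma-variate AIF are assumptions for this example; the thesis' exact Fermi and distributed parameter formulations may differ.

```python
# Sketch: Fermi-model myocardial blood flow fitting (simplified parametrization).
import numpy as np
from scipy.optimize import curve_fit

dt = 1.0  # sampling interval in seconds (assumption)

def fermi_irf(t, F, k, t0):
    # Fermi impulse response; the plateau amplitude F approximates MBF
    return F / (1.0 + np.exp(k * (t - t0)))

def model_tissue_curve(t, F, k, t0, aif):
    # Tissue concentration curve = impulse response convolved with the AIF
    return np.convolve(fermi_irf(t, F, k, t0), aif)[: len(t)] * dt

def fit_fermi(t, aif, tissue):
    f = lambda t, F, k, t0: model_tissue_curve(t, F, k, t0, aif)
    popt, _ = curve_fit(f, t, tissue, p0=[1.0, 0.5, 5.0], maxfev=5000)
    return popt  # F ~ MBF estimate; units depend on concentration scaling

# Synthetic demonstration with a gamma-variate AIF (illustrative values).
rng = np.random.default_rng(1)
t = np.arange(0, 60, dt)
aif = (t / 6.0) ** 3 * np.exp(-t / 6.0)
tissue = model_tissue_curve(t, 1.2, 0.3, 8.0, aif) + rng.normal(0, 0.01, t.size)
F, k, t0 = fit_fermi(t, aif, tissue)
```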

    Laminar Newtonian jets at high Reynolds number and high surface tension

    No abstract. Peer reviewed. Full text: http://deepblue.lib.umich.edu/bitstream/2027.42/37403/1/690340918_ftp.pd

    A multimodality cross-validation study of cardiac perfusion using MR and CT.

    Modern advances in magnetic resonance (MR) and computed tomography (CT) perfusion imaging have produced techniques for myocardial perfusion assessment. However, each imaging technique has limitations that may be overcome by multimodality cross-validation of perfusion imaging and analysis. We calculated absolute myocardial blood flow (MBF) from MR data using a Fermi function and the transmural perfusion ratio (TPR) from CT perfusion data in a patient with coronary artery disease (CAD). Comparison of the MBF and TPR results showed good correlation, highlighting a promising potential to extend our multimodality perfusion assessment to a cohort of patients with CAD.
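
    For context, the TPR is commonly computed as the ratio of mean subendocardial to mean subepicardial CT attenuation within a myocardial segment; the sketch below illustrates that calculation under those assumptions (the function and variable names are hypothetical, and the abnormality threshold mentioned in the comment is only indicative).

```python
# Sketch: transmural perfusion ratio (TPR) from CT attenuation samples.
import numpy as np

def transmural_perfusion_ratio(endo_hu, epi_hu):
    """endo_hu, epi_hu: Hounsfield-unit samples from the subendocardial and
    subepicardial layers of a myocardial segment."""
    return np.mean(endo_hu) / np.mean(epi_hu)

# A TPR markedly below 1 in a segment is typically read as a subendocardial
# perfusion deficit (exact cut-offs vary between studies).
tpr = transmural_perfusion_ratio(np.array([78.0, 81.0, 75.0]),
                                 np.array([95.0, 98.0, 93.0]))
```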

    Pharmacokinetic modelling for the simultaneous assessment of perfusion and ¹⁸F-flutemetamol uptake in cerebral amyloid angiopathy using a reduced PET-MR acquisition time: proof of concept

    Purpose: Cerebral amyloid angiopathy (CAA) is a cerebral small vessel disease associated with perivascular β-amyloid deposition. CAA is also associated with strokes due to lobar intracerebral haemorrhage (ICH). ¹⁸F-flutemetamol amyloid ligand PET may improve the early detection of CAA. We performed pharmacokinetic modelling using both full (0–30, 90–120 min) and reduced (30 min) ¹⁸F-flutemetamol PET-MR acquisitions to investigate regional cerebral perfusion and amyloid deposition in ICH patients.
    Methods: Dynamic ¹⁸F-flutemetamol PET-MR was performed in a pilot cohort of sixteen ICH participants: eight lobar ICH cases with probable CAA and eight deep ICH patients. A model-based input function (mIF) method was developed for compartmental modelling. mIF one-tissue (1-TC) and two-tissue (2-TC) compartmental modelling, reference tissue models, and standardized uptake value ratios were assessed in the setting of probable CAA detection.
    Results: The mIF 1-TC model detected perfusion deficits and ¹⁸F-flutemetamol uptake in cases with probable CAA versus deep ICH patients, for both the full and reduced PET acquisition times (all P < 0.05). In the reduced PET acquisition, mIF 1-TC modelling reached the highest sensitivity and specificity in detecting perfusion deficits (0.87, 0.77) and ¹⁸F-flutemetamol uptake (0.83, 0.71) in cases with probable CAA. Overall, 52 and 48 of the 64 brain areas with ¹⁸F-flutemetamol-determined amyloid deposition showed reduced perfusion for the 1-TC and 2-TC models, respectively.
    Conclusion: Pharmacokinetic (1-TC) modelling using a 30 min PET-MR time frame detected impaired haemodynamics and increased amyloid load in probable CAA. Perfusion deficits and amyloid burden co-existed within cases with CAA, demonstrating a distinct imaging pattern which may have merit in elucidating the pathophysiological process of CAA.
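
    As an illustration of the 1-TC modelling referred to above, the sketch below fits the standard one-tissue compartment solution, Ct(t) = K1·exp(−k2·t) ⊛ Cp(t), to a dynamic PET time-activity curve. A measured plasma input Cp stands in here for the paper's model-based input function, and the starting values and bounds are assumptions for the example.

```python
# Sketch: one-tissue compartment (1-TC) model fit for dynamic PET data.
# Kinetics: dCt/dt = K1*Cp(t) - k2*Ct(t), solved as a convolution.
import numpy as np
from scipy.optimize import curve_fit

def one_tc(t, K1, k2, cp, dt):
    # Analytic solution: Ct(t) = K1 * (exp(-k2 t) convolved with Cp)(t)
    return K1 * np.convolve(np.exp(-k2 * t), cp)[: len(t)] * dt

def fit_1tc(t, cp, ct, dt):
    f = lambda t, K1, k2: one_tc(t, K1, k2, cp, dt)
    (K1, k2), _ = curve_fit(f, t, ct, p0=[0.1, 0.05], bounds=(0, np.inf))
    # K1 reflects tracer delivery (perfusion); K1/k2 gives the distribution volume
    return K1, k2
```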

    Churn flow in high viscosity oils and large diameter columns

    Churn flow is an important intermediate flow regime occurring between the slug and annular flow patterns in two-phase flow, with profound implications for the chemical and petroleum industries. The majority of studies of churn flow to date have been carried out using water or liquids of low viscosity, and limited information exists on the behaviour of high viscosity liquids, which resemble realistic process conditions. In this paper, a study that investigated churn flow and its characteristics in high viscosity oils (360 and 330 Pa·s) and large diameter columns (240 and 290 mm) is presented for the first time. Transition to the churn flow regime starts when the structure velocity, length, and frequency of the liquid bridges, which appear at the end of slug flow, increase. In churn flow, gas flows through the core of the oil column along a wavy passage, leaving the top surface open to the atmosphere and potentially creating a very long bubble. The average bubble length was seen to decrease with increasing gas flow rate, while no considerable change was observed in void fraction, structure velocity, or film thickness in this flow pattern.